Performance of CAP-Specified Linear Algebra Algorithms

Authors

  • Marc Mazzariol
  • Benoit A. Gennart
  • Vincent Messerli
  • Roger D. Hersch
Abstract

The traditional approach to the parallelization of linear algebra algorithms such as matrix multiplication and LU factorization calls for static allocation of matrix blocks to processing elements (PEs). Such algorithms suffer from two drawbacks: they are very sensitive to load imbalances between PEs, and they make it difficult to take advantage of pipelining opportunities. This paper describes dynamic versions of linear algebra algorithms, where subtasks (matrix block multiplication, matrix block LU factorization) are dynamically allocated to PEs, and analyses the performance of the dynamic algorithms theoretically. This paper's contribution is to show that the dynamic, pipelined linear algebra algorithms can be specified compactly in CAP and yet achieve good performance. CAP is a C++ language extension for the specification of parallel applications based on macro-dataflow graphs. The CAP model is general and supports pipelining.
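The paper's algorithms are written in CAP itself, which is not reproduced here. As a rough illustration of the dynamic-allocation idea only (not the paper's CAP code), the C++ sketch below hands the block-multiplication subtasks of a blocked matrix multiply to worker threads on demand through a shared atomic counter; the matrix sizes, names (block_multiply, next_task) and the thread-based setting are assumptions made for this example.

```cpp
// Sketch: dynamic allocation of block-multiplication subtasks to workers.
// Hypothetical names and sizes; illustration of the idea, not CAP code.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

constexpr std::size_t N = 256;   // matrix dimension (multiple of B)
constexpr std::size_t B = 64;    // block size
constexpr std::size_t NB = N / B;

using Matrix = std::vector<double>;  // row-major N x N storage

// C block (bi,bj) += A block (bi,bk) * B block (bk,bj)
void block_multiply(const Matrix& A, const Matrix& Bmat, Matrix& C,
                    std::size_t bi, std::size_t bj, std::size_t bk) {
    for (std::size_t i = bi * B; i < (bi + 1) * B; ++i)
        for (std::size_t k = bk * B; k < (bk + 1) * B; ++k) {
            const double a = A[i * N + k];
            for (std::size_t j = bj * B; j < (bj + 1) * B; ++j)
                C[i * N + j] += a * Bmat[k * N + j];
        }
}

int main() {
    Matrix A(N * N, 1.0), Bmat(N * N, 1.0), C(N * N, 0.0);

    // One task per output block (bi, bj); the bk loop stays inside the
    // task so no two workers ever update the same C block.
    std::atomic<std::size_t> next_task{0};
    const std::size_t num_tasks = NB * NB;

    auto worker = [&]() {
        // Workers claim the next unprocessed block on demand: a faster
        // PE simply ends up multiplying more blocks, which absorbs
        // load imbalance without any static schedule.
        for (std::size_t t = next_task.fetch_add(1); t < num_tasks;
             t = next_task.fetch_add(1)) {
            const std::size_t bi = t / NB, bj = t % NB;
            for (std::size_t bk = 0; bk < NB; ++bk)
                block_multiply(A, Bmat, C, bi, bj, bk);
        }
    };

    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned p = 0; p < threads; ++p) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
    return 0;
}
```

In the CAP formulation the same subtasks are expressed as nodes of a macro-dataflow graph, which additionally lets block transfers overlap with computation (pipelining); the sketch above captures only the load-balancing aspect of dynamic allocation.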


Similar resources

Investigating the Effects of Hardware Parameters on Power Consumptions in SPMV Algorithms on Graphics Processing Units (GPUs)

Although sparse matrix-vector multiplication (SpMV) algorithms are simple, they form an important part of linear algebra algorithms in mathematics and physics. As these algorithms can be run in parallel, Graphics Processing Units (GPUs) have been considered one of the best candidates to run them. In recent years, power consumption has been considered as one of the metr...
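For context on the SpMV kernel discussed above, here is a minimal CPU-side sketch of sparse matrix-vector multiplication in CSR format; the data layout and names are generic assumptions for illustration, not the GPU kernels studied in that paper.

```cpp
// Minimal CSR sparse matrix-vector multiply, y = A * x (CPU version).
// Generic illustration only; not the GPU kernels from the cited study.
#include <cstddef>
#include <vector>

struct CsrMatrix {
    std::size_t rows;
    std::vector<std::size_t> row_ptr;  // size rows + 1
    std::vector<std::size_t> col_idx;  // size nnz
    std::vector<double> values;        // size nnz
};

// Each output row is independent, which is what makes SpMV easy to
// parallelize (on a GPU, typically one thread or warp per row).
std::vector<double> spmv(const CsrMatrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.rows, 0.0);
    for (std::size_t r = 0; r < A.rows; ++r) {
        double sum = 0.0;
        for (std::size_t k = A.row_ptr[r]; k < A.row_ptr[r + 1]; ++k)
            sum += A.values[k] * x[A.col_idx[k]];
        y[r] = sum;
    }
    return y;
}

int main() {
    // 2x2 example: [[4, 1], [0, 3]] * [1, 2] = [6, 6]
    CsrMatrix A{2, {0, 2, 3}, {0, 1, 1}, {4.0, 1.0, 3.0}};
    std::vector<double> y = spmv(A, {1.0, 2.0});
    return (y[0] == 6.0 && y[1] == 6.0) ? 0 : 1;
}
```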


Modeling and forecasting US presidential election using learning algorithms

The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural network (ANN) and support vector regression (SVR) models are compared based on some specified performance measures. Moreover, six independent variables, including GDP, the unemployment rate, the president's approval rate, and others, are co...


Implementing BLAS Level 3 on the CAP-II

The Basic Linear Algebra Subprograms (BLAS) library is widely used in many supercomputing applications, and is used to implement more extensive linear algebra subroutine libraries such as LINPACK and LAPACK. The use of BLAS aids the clarity, portability and maintenance of mathematical software. BLAS level 1 routines involve vector-vector operations, level 2 routines involve matrix-vector ope...
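To make the BLAS level hierarchy mentioned above concrete, the snippet below issues one representative CBLAS call per level (daxpy for level 1, dgemv for level 2, dgemm for level 3). It assumes a CBLAS implementation such as OpenBLAS is installed and linked, and is of course not code from the CAP-II implementation described in the cited paper.

```cpp
// One representative CBLAS call per BLAS level; requires a CBLAS
// implementation (e.g. link with -lopenblas). Illustration only.
#include <cblas.h>

int main() {
    double x[3] = {1, 2, 3};
    double y[3] = {1, 1, 1};
    double A[9]  = {1, 0, 0,  0, 1, 0,  0, 0, 1};  // 3x3 identity, row-major
    double Bm[9] = {1, 2, 3,  4, 5, 6,  7, 8, 9};
    double C[9]  = {0};

    // Level 1 (vector-vector): y = 2*x + y
    cblas_daxpy(3, 2.0, x, 1, y, 1);

    // Level 2 (matrix-vector): y = 1.0*A*x + 0.0*y
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3, 1.0, A, 3, x, 1, 0.0, y, 1);

    // Level 3 (matrix-matrix): C = 1.0*A*Bm + 0.0*C -- the kernel that
    // blocked algorithms such as LU factorization are typically built on.
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                3, 3, 3, 1.0, A, 3, Bm, 3, 0.0, C, 3);
    return 0;
}
```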


WZ factorization via Abaffy-Broyden-Spedicato algorithms

Classes of Abaffy-Broyden-Spedicato (ABS) methods have been introduced for solving linear systems of equations. The algorithms are powerful methods for developing matrix factorizations and many fundamental numerical linear algebra processes. Here, we show how to apply the ABS algorithms to devise algorithms to compute the WZ and ZW factorizations of a nonsingular matrix as well as...


The Matrix Template Library: A Generic Programming Approach to High Performance Numerical Linear Algebra

We present a unified approach for expressing high performance numerical linear algebra routines for large classes of dense and sparse matrices. As with the Standard Template Library [10], we explicitly separate algorithms from data structures through the use of generic programming techniques. We conclude that such an approach does not hinder high performance. On the contrary, writing portable h...
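As a loose sketch of the generic-programming idea that abstract describes (writing the algorithm once, independently of the matrix storage), the template below accepts any matrix type exposing rows(), cols() and operator()(i, j). The interface is invented for this example and is not MTL's actual API.

```cpp
// Generic matrix-vector multiply: the algorithm is written once and works
// for any matrix type with rows(), cols() and operator()(i, j).
// Illustration of the idea only; not MTL's real interface.
#include <cstddef>
#include <vector>

template <typename MatrixT>
std::vector<double> mat_vec(const MatrixT& A, const std::vector<double>& x) {
    std::vector<double> y(A.rows(), 0.0);
    for (std::size_t i = 0; i < A.rows(); ++i)
        for (std::size_t j = 0; j < A.cols(); ++j)
            y[i] += A(i, j) * x[j];
    return y;
}

// One possible dense storage class; a banded or sparse class with the
// same interface could be swapped in without touching mat_vec.
class DenseMatrix {
public:
    DenseMatrix(std::size_t r, std::size_t c) : r_(r), c_(c), data_(r * c, 0.0) {}
    std::size_t rows() const { return r_; }
    std::size_t cols() const { return c_; }
    double& operator()(std::size_t i, std::size_t j) { return data_[i * c_ + j]; }
    double operator()(std::size_t i, std::size_t j) const { return data_[i * c_ + j]; }
private:
    std::size_t r_, c_;
    std::vector<double> data_;
};

int main() {
    DenseMatrix A(2, 2);
    A(0, 0) = 1.0; A(1, 1) = 2.0;
    std::vector<double> y = mat_vec(A, {3.0, 4.0});  // expect {3.0, 8.0}
    return (y[0] == 3.0 && y[1] == 8.0) ? 0 : 1;
}
```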




Journal title:

Volume  Issue

Pages  -

Publication date: 1997